Bubble, Bubble, AI's Rumble: Why Global Financial Regulatory Incident Reporting is Our Shield Against Systemic Stumbles
Gupta, Anchal, Pappyshev, Gleb, Kwok, James T.
"Double, double toil and trouble; Fire burn and cauldron bubble." As Shakespeare's witches foretold chaos through cryptic prophecies, modern capital markets grapple with systemic risks concealed by opaque AI systems. According to IMF, the August 5, 2024, plunge in Japanese and U.S. equities can be linked to algorithmic trading yet absent from existing AI incidents database exemplifies this transparency crisis . Current AI incident databases, reliant on crowdsourcing or news scraping, systematically overlook capital market anomalies, particularly in algorithmic and high - frequency trading. We address this critical gap by proposing a regulatory - grade global database that elegantly synthesi s es post - trade reporting frameworks with proven incident documentation models from healthcare and aviation. Our framework's temporal data omission technique masking timestamps while preserving percentage - based metrics enables sophisticated cross - jurisdictional analysis of emerging risks while safeguarding confidential business information. Synthetic data validation ( modelled after real life published incidents, sentiments, data) (n=2,999 incidents) reveals compelling patterns: systemic risks transcending geographical boundaries, market manipulation clusters distinctly identifiable via K - means algorithms, and AI system typology exerting significantly greater influence on trading behaviour than geographical location, This tripartite solution empowers regulators with unprecedented cross - jurisdictional oversight, financial institutions with seamless compliance integration, and investors with critical visibility into previously obscured AI - driven vulnerabilities. We call for immediate action to strengthen risk management and foster resilience in AI - driven financial markets against the volatile "cauldron" of AI - driven syste m ic risks.
- Transportation > Air (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Banking & Finance > Trading (1.00)
- Banking & Finance > Economy (0.93)
A five-layer framework for AI governance: integrating regulation, standards, and certification
Agarwal, Avinash, Nene, Manisha J.
Purpose: The governance of artificial intelligence (AI) systems requires a structured approach that connects high-level regulatory principles with practical implementation. Existing frameworks lack clarity on how regulations translate into conformity mechanisms, leading to gaps in compliance and enforcement. This paper addresses this critical gap in AI governance. Methodology/Approach: A five-layer AI governance framework is proposed, spanning from broad regulatory mandates to specific standards, assessment methodologies, and certification processes. By narrowing its scope through progressively focused layers, the framework provides a structured pathway to meet technical, regulatory, and ethical requirements. Its applicability is validated through two case studies on AI fairness and AI incident reporting. Findings: The case studies demonstrate the framework's ability to identify gaps in legal mandates, standardization, and implementation. It adapts to both global and region-specific AI governance needs, mapping regulatory mandates to practical applications to improve compliance and risk management. Practical Implications: By offering a clear and actionable roadmap, this work contributes to global AI governance by equipping policymakers, regulators, and industry stakeholders with a model to enhance compliance and risk management. Social Implications: The framework supports the development of policies that build public trust and promote the ethical use of AI for the benefit of society. Originality/Value: This study proposes a five-layer AI governance framework that bridges high-level regulatory mandates and implementation guidelines. Validated through case studies on AI fairness and incident reporting, it identifies gaps such as missing standardized assessment procedures and reporting mechanisms, providing a structured foundation for targeted governance measures.
- Law (1.00)
- Information Technology > Security & Privacy (0.87)
- Government > Military (0.68)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Applied AI (1.00)
Incorporating AI Incident Reporting into Telecommunications Law and Policy: Insights from India
Agarwal, Avinash, Nene, Manisha J.
The integration of artificial intelligence (AI) into telecommunications infrastructure introduces novel risks, such as algorithmic bias and unpredictable system behavior, that fall outside the scope of traditional cybersecurity and data protection frameworks. This paper introduces a precise definition and a detailed typology of telecommunications AI incidents, establishing them as a distinct category of risk that extends beyond conventional cybersecurity and data protection breaches. It argues for their recognition as a distinct regulatory concern. Using India as a case study for jurisdictions that lack a horizontal AI law, the paper analyzes the country's key digital regulations. The analysis reveals that India's existing legal instruments, including the Telecommunications Act, 2023, the CERT-In Rules, and the Digital Personal Data Protection Act, 2023, focus on cybersecurity and data breaches, creating a significant regulatory gap for AI-specific operational incidents, such as performance degradation and algorithmic bias. The paper also examines structural barriers to disclosure and the limitations of existing AI incident repositories. Based on these findings, the paper proposes targeted policy recommendations centered on integrating AI incident reporting into India's existing telecom governance. Key proposals include mandating reporting for high-risk AI failures, designating an existing government body as a nodal agency to manage incident data, and developing standardized reporting frameworks. These recommendations aim to enhance regulatory clarity and strengthen long-term resilience, offering a pragmatic and replicable blueprint for other nations seeking to govern AI risks within their existing sectoral frameworks.
- Telecommunications (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
Automating AI Failure Tracking: Semantic Association of Reports in AI Incident Database
Russo, Diego, Orlando, Gian Marco, La Gatta, Valerio, Moscato, Vincenzo
Artificial Intelligence (AI) systems are transforming critical sectors such as healthcare, finance, and transportation, enhancing operational efficiency and decision-making processes. However, their deployment in high-stakes domains has exposed vulnerabilities that can result in significant societal harm. To systematically study and mitigate these risks, initiatives like the AI Incident Database (AIID) have emerged, cataloging over 3,000 real-world AI failure reports. Currently, associating a new report with the appropriate AI Incident relies on manual expert intervention, limiting scalability and delaying the identification of emerging failure patterns. To address this limitation, we propose a retrieval-based framework that automates the association of new reports with existing AI Incidents through semantic similarity modeling. We formalize the task as a ranking problem, where each report, comprising a title and a full textual description, is compared to previously documented AI Incidents based on embedding cosine similarity. Benchmarking traditional lexical methods, cross-encoder architectures, and transformer-based sentence embedding models, we find that the latter consistently achieve superior performance. Our analysis further shows that combining titles and descriptions yields substantial improvements in ranking accuracy compared to using titles alone. Moreover, retrieval performance remains stable across variations in description length, highlighting the robustness of the framework. Finally, we find that retrieval performance consistently improves as the training set expands. Our approach provides a scalable and efficient solution for supporting the maintenance of the AIID.
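A minimal sketch of the retrieval idea described above: embed the concatenated title and description of a new report with a sentence embedding model and rank existing AI Incidents by cosine similarity. The specific model name below is an assumption; the paper benchmarks several lexical, cross-encoder, and sentence-embedding approaches and does not prescribe this one.

```python
# Hedged sketch of report-to-incident retrieval: rank existing AI Incidents by
# cosine similarity to a new report's "title + description" embedding.
# The embedding model choice is an assumption, not the paper's stated configuration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

incidents = [
    "Incident 1: trading algorithm amplifies flash crash",
    "Incident 2: chatbot gives harmful medical advice",
]
new_report = "Flash sell-off linked to high-frequency trading system"  # title + description

incident_emb = model.encode(incidents, convert_to_tensor=True)
report_emb = model.encode(new_report, convert_to_tensor=True)

scores = util.cos_sim(report_emb, incident_emb)[0]   # cosine similarity per existing incident
ranking = scores.argsort(descending=True)            # best-matching incident first
print([(incidents[i], float(scores[i])) for i in ranking])
```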
- North America > United States > New Jersey (0.04)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- Europe > Italy > Campania > Naples (0.04)
- Asia > Middle East > Jordan (0.04)
- Law (1.00)
- Health & Medicine (1.00)
- Government > Regional Government > North America Government > United States Government (0.93)
- (2 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Text Processing (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.34)
Who is Responsible When AI Fails? Mapping Causes, Entities, and Consequences of AI Privacy and Ethical Incidents
Hadan, Hilda, Mogavi, Reza Hadi, Zhang-Kennedy, Leah, Nacke, Lennart E.
The rapid growth of artificial intelligence (AI) technologies has changed decision-making in many fields, but it has also raised major privacy and ethical concerns. However, many AI incident taxonomies and guidelines for academia, industry, and government lack grounding in real-world incidents. We analyzed 202 real-world AI privacy and ethical incidents. This produced a taxonomy that classifies incident types across AI lifecycle stages and accounts for contextual factors such as causes, responsible entities, disclosure sources, and impacts. Our findings show insufficient incident reporting from AI developers and users. Many incidents are caused by poor organizational decisions and legal non-compliance. Only a few legal actions and corrective measures exist, and risk-mitigation efforts remain limited. Our taxonomy contributes a structured approach to reporting future AI incidents. Our findings demonstrate that current AI governance frameworks are inadequate: we urgently need child-specific protections and AI policies for social media that moderate and reduce the spread of harmful AI-generated content. Our research provides insights for policymakers and practitioners that help them design ethical AI, supports AI incident detection and risk management, and guides AI policy development. Improved policies will protect people from harmful AI applications and support innovation in AI systems.
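One way to make the taxonomy's dimensions concrete is a structured incident record covering the contextual factors the abstract names (incident type, lifecycle stage, cause, responsible entity, disclosure source, impact). The field names and example values below are illustrative assumptions, not the authors' published coding scheme.

```python
# Illustrative sketch only: encoding the dimensions the taxonomy describes as a typed record.
# Field names and example values are assumptions, not the authors' schema.
from dataclasses import dataclass

@dataclass
class PrivacyEthicsIncident:
    incident_type: str        # e.g. "unauthorized data use", "harmful generated content"
    lifecycle_stage: str      # e.g. "data collection", "deployment"
    cause: str                # e.g. "poor organizational decision", "legal non-compliance"
    responsible_entity: str   # e.g. "AI developer", "AI user"
    disclosure_source: str    # e.g. "journalist", "regulator", "affected individual"
    impact: str               # e.g. "privacy violation", "reputational harm"

example = PrivacyEthicsIncident(
    incident_type="harmful generated content",
    lifecycle_stage="deployment",
    cause="legal non-compliance",
    responsible_entity="AI developer",
    disclosure_source="journalist",
    impact="privacy violation",
)
print(example)
```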
- Asia > Philippines (0.14)
- North America > United States > District of Columbia > Washington (0.14)
- Africa > Kenya (0.14)
- (23 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
Standardised schema and taxonomy for AI incident databases in critical digital infrastructure
Agarwal, Avinash, Nene, Manisha J.
The rapid deployment of Artificial Intelligence (AI) in critical digital infrastructure introduces significant risks, necessitating a robust framework for systematically collecting AI incident data to prevent future incidents. Existing databases lack the granularity as well as the standardized structure required for consistent data collection and analysis, impeding effective incident management. This work proposes a standardized schema and taxonomy for AI incident databases, addressing these challenges by enabling detailed and structured documentation of AI incidents across sectors. Key contributions include developing a unified schema, introducing new fields such as incident severity, causes, and harms caused, and proposing a taxonomy for classifying AI incidents in critical digital infrastructure. The proposed solution facilitates more effective incident data collection and analysis, thus supporting evidence-based policymaking, enhancing industry safety measures, and promoting transparency. This work lays the foundation for a coordinated global response to AI incidents, ensuring trust, safety, and accountability in using AI across regions.
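A rough sketch of what a record under such a standardized schema might contain, using the fields the abstract names (severity, causes, harms caused) plus a few assumed keys for context; the allowed values and the validation rule are illustrative only, not the proposed standard itself.

```python
# Hedged sketch of a standardized incident record. The keys "severity", "causes", and
# "harms_caused" come from the abstract; the remaining keys and values are assumptions.
incident_record = {
    "incident_id": "AI-CDI-2024-0001",
    "sector": "telecommunications",                    # critical digital infrastructure sector
    "severity": "high",                                # proposed new field
    "causes": ["model drift", "inadequate testing"],   # proposed new field
    "harms_caused": ["service outage"],                # proposed new field
    "ai_system_type": "traffic-prediction model",
    "description": "Degraded routing decisions during peak load.",
}

def has_mandatory_fields(record: dict) -> bool:
    """Check that the mandatory fields of this sketched schema are present."""
    required = {"incident_id", "sector", "severity", "causes", "harms_caused"}
    return required.issubset(record)

print(has_mandatory_fields(incident_record))  # True
```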
- South America > Chile (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Europe > Finland > Uusimaa > Helsinki (0.04)
- (3 more...)
- Law (1.00)
- Government (1.00)
- Information Technology > Security & Privacy (0.69)
- Energy > Power Industry (0.68)
Advancing Trustworthy AI for Sustainable Development: Recommendations for Standardising AI Incident Reporting
Agarwal, Avinash, Nene, Manisha J.
The increasing use of AI technologies has led to a corresponding rise in AI incidents, posing risks and causing harm to individuals, organizations, and society. This study recognizes and addresses the lack of standardized protocols for reliably and comprehensively gathering incident data that is crucial for preventing future incidents and developing mitigation strategies. Specifically, this study analyses existing open-access AI-incident databases through a systematic methodology and identifies nine gaps in current AI incident reporting practices. Further, it proposes nine actionable recommendations to enhance standardization efforts and address these gaps. Ensuring the trustworthiness of enabling technologies such as AI is necessary for sustainable digital transformation. Our research promotes the development of standards to prevent future AI incidents and promote trustworthy AI, thus facilitating achievement of the UN Sustainable Development Goals. Through international cooperation, stakeholders can unlock the transformative potential of AI, enabling a sustainable and inclusive future for all.
- North America > United States > Virginia (0.04)
- North America > Canada (0.04)
- Asia > India > NCT > New Delhi (0.04)
- (3 more...)
- Law (1.00)
- Transportation (0.95)
- Government (0.94)
- (2 more...)
Lessons for Editors of AI Incidents from the AI Incident Database
Paeth, Kevin, Atherton, Daniel, Pittaras, Nikiforos, Frase, Heather, McGregor, Sean
As artificial intelligence (AI) systems become increasingly deployed across the world, they are also increasingly implicated in AI incidents: harm events to individuals and society. As a result, industry, civil society, and governments worldwide are developing best practices and regulations for monitoring and analyzing AI incidents. The AI Incident Database (AIID) is a project that catalogs AI incidents and supports further research by providing a platform to classify incidents for different operational and research-oriented goals. This study reviews the AIID's dataset of 750+ AI incidents and two independent taxonomies applied to these incidents to identify common challenges to indexing and analyzing AI incidents. We find that certain patterns of AI incidents present structural ambiguities that challenge incident databasing, and we explore how epistemic uncertainty in AI incident reporting is unavoidable. We therefore report mitigations to make incident processes more robust to uncertainty related to cause, extent of harm, severity, or technical details of implicated systems. With these findings, we discuss how to develop future AI incident reporting practices.
- Asia > Russia (0.15)
- North America > United States > Colorado (0.05)
- North America > United States > New York > New York County > New York City (0.04)
- (6 more...)
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Transportation (0.95)
- (2 more...)
AI for All: Identifying AI Incidents Related to Diversity and Inclusion
Shams, Rifat Ara, Zowghi, Didar, Bano, Muneera
The rapid expansion of Artificial Intelligence (AI) technologies has introduced both significant advancements and challenges, with diversity and inclusion (D&I) emerging as a critical concern. Addressing D&I in AI is essential to reduce biases and discrimination, enhance fairness, and prevent adverse societal impacts. Despite its importance, D&I considerations are often overlooked, resulting in incidents marked by built-in biases and ethical dilemmas. Analyzing AI incidents through a D&I lens is crucial for identifying causes of biases and developing strategies to mitigate them, ensuring fairer and more equitable AI technologies. However, systematic investigations of D&I-related AI incidents are scarce. This study addresses these challenges by identifying and understanding D&I issues within AI systems through a manual analysis of AI incident databases (AIID and AIAAIC). The research develops a decision tree to investigate D&I issues tied to AI incidents and populate a public repository of D&I-related AI incidents. The decision tree was validated through a card sorting exercise and focus group discussions. The research demonstrates that almost half of the analyzed AI incidents are related to D&I, with a notable predominance of racial, gender, and age discrimination. The decision tree and resulting public repository aim to foster further research and responsible AI practices, promoting the development of inclusive and equitable AI systems.
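As a toy illustration of the decision-procedure idea (not the authors' validated decision tree), the snippet below flags whether an incident description plausibly involves one of the predominant D&I dimensions the study reports: racial, gender, or age discrimination. The keyword lists and function name are assumptions for illustration only.

```python
# Toy sketch, not the authors' card-sorted decision tree: a first-pass check for whether
# an incident report is D&I-related and which dimension it most plausibly involves.
# Keyword lists are assumptions.
DIMENSION_KEYWORDS = {
    "racial discrimination": ["race", "racial", "ethnicity"],
    "gender discrimination": ["gender", "women", "sexist"],
    "age discrimination": ["ageism", "elderly", "older adults"],
}

def classify_d_and_i(report_text: str) -> list[str]:
    """Return D&I dimensions suggested by the report text (empty list = not flagged as D&I-related)."""
    text = report_text.lower()
    return [dim for dim, words in DIMENSION_KEYWORDS.items()
            if any(w in text for w in words)]

print(classify_d_and_i("Hiring model penalized applications mentioning women's colleges"))
# -> ['gender discrimination']
```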
- Oceania > Australia (0.14)
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > United States > New York (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.67)
- Information Technology (1.00)
- Health & Medicine (1.00)
- Transportation > Air (0.93)
- Law > Civil Rights & Constitutional Law (0.88)
UK needs system for recording AI misuse and malfunctions, thinktank says
The UK needs a system for recording misuse and malfunctions in artificial intelligence or ministers risk being unaware of alarming incidents involving the technology, according to a report. The next government should create a system for logging incidents involving AI in public services and should consider building a central hub for collating AI-related episodes across the UK, said the Centre for Long-Term Resilience (CLTR), a thinktank. CLTR, which focuses on government responses to unforeseen crises and extreme risks, said an incident reporting regime such as the system operated by the Air Accidents Investigation Branch (AAIB) was vital for using the technology successfully. The report cites 10,000 AI "safety incidents" recorded by news outlets since 2014, listed in a database compiled by the Organisation for Economic Co-operation and Development, an international research body. Examples logged on the OECD's AI safety incident monitor include a deepfake of the Labour leader, Keir Starmer, purportedly being abusive to party staff, Google's Gemini model portraying German second world war soldiers as people of colour, incidents involving self-driving cars and a man who planned to assassinate the late queen drawing encouragement from a chatbot.
- Europe > United Kingdom (0.73)
- Europe > Netherlands (0.05)
- Europe > Estonia > Harju County > Tallinn (0.05)
- Transportation > Air (0.73)
- Government > Voting & Elections (0.56)
- Government > Military (0.56)
- Government > Regional Government (0.54)